11 research outputs found

    A multichannel wireless sEMG sensor endowing a 0.13 μm CMOS mixed-signal SoC

    This paper presents a wireless multichannel surface electromyography (sEMG) sensor featuring a custom 0.13 μm CMOS mixed-signal system-on-chip (SoC) analog frontend circuit. The proposed sensor includes 10 sEMG recording channels with tunable bandwidth (BW) and analog-to-digital converter (ADC) resolution. The SoC includes 10 bioamplifiers, 10 3rd-order ΔΣ MASH 1-1-1 ADCs, and 10 on-chip decimation filters (DF). The SoC delivers the sEMG sample data over a serial peripheral interface (SPI) bus to a microcontroller unit (MCU), which then transfers the data to a wireless transceiver. We report sEMG waveforms acquired using a custom multichannel electrode module and a comparison with a commercial-grade system. Results show that the proposed integrated wireless SoC-based system compares well with the commercial-grade sEMG recording system. The sensor has an input-referred noise of 2.5 μVrms (BW of 10-500 Hz), an input dynamic range of 6 mVpp, and a programmable sampling rate of 2 ksps for sEMG, while consuming only 7.1 μW/channel for the SoC (with ADC and DF) and 21.8 mW for the whole sensor (transceiver, MCU, etc.). The system fits on a 1.5 × 2.0 cm² printed circuit board and weighs less than 1 g.
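
    To make the acquisition chain concrete, the following minimal Python sketch models a 1-bit ΔΣ modulator followed by a decimation filter. It is an illustration only: a first-order modulator and a moving-average decimator stand in for the SoC's 3rd-order MASH 1-1-1 ADC and on-chip DF, and the oversampling ratio of 64 is an assumed value, not taken from the paper.

```python
import numpy as np

def delta_sigma_1bit(x):
    """First-order ΔΣ modulator: a simplified stand-in for the SoC's
    3rd-order MASH 1-1-1 (one integrator, 1-bit quantizer, feedback)."""
    u, y_prev = 0.0, 0.0
    bits = np.empty(len(x))
    for n, s in enumerate(x):
        u += s - y_prev                     # integrate input minus feedback
        y_prev = 1.0 if u >= 0 else -1.0    # 1-bit quantizer
        bits[n] = y_prev
    return bits

def decimate(bits, osr):
    """Three moving-average passes (CIC-like), then downsample by `osr`."""
    y = bits
    for _ in range(3):
        y = np.convolve(y, np.ones(osr) / osr, mode="same")
    return y[::osr]

# Assumed numbers: 2 ksps output rate with an oversampling ratio of 64.
osr, fs_out = 64, 2_000
t = np.arange(osr * fs_out) / (osr * fs_out)        # 1 s of signal
semg = 0.5 * np.sin(2 * np.pi * 150 * t)            # 150 Hz tone in the sEMG band
recovered = decimate(delta_sigma_1bit(semg), osr)   # ~2000 samples out
```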

    Intuitive wireless control of a robotic arm for people living with an upper body disability

    Assistive technologies (ATs), also called extrinsic enablers, are useful tools for people living with various disabilities. The key considerations when designing such devices concern not only their intended purpose, but also the most suitable human-machine interface (HMI) to provide to users. This paper describes the design of a highly intuitive wireless controller for people living with upper-body disabilities who retain partial or complete control of their neck and shoulders. Tested with JACO, a six-degree-of-freedom (6-DOF) assistive robotic arm with 3 flexible fingers on its end-effector, the system described in this article is made of low-cost commercial off-the-shelf components and allows a full emulation of JACO's standard controller, a 3-axis joystick with 7 user buttons. To do so, three nine-degree-of-freedom (9-DOF) inertial measurement units (IMUs) are connected to a microcontroller and measure the position of the user's head and shoulders using a complementary filter approach. The results are then transmitted to a base station via a 2.4-GHz low-power wireless transceiver and interpreted by the control algorithm running on a PC host. A dedicated software interface allows the user to quickly calibrate the controller, and translates the information into suitable commands for JACO. The proposed controller is thoroughly described, from the electronic design to the implemented algorithms and user interfaces. Its performance and future improvements are discussed as well.
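
    The complementary filter mentioned above can be summarized in a few lines. The Python sketch below shows one update step for a single axis; the sample values, the 100 Hz rate, and the alpha = 0.98 weighting are hypothetical stand-ins, not values from the paper.

```python
import math

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update: trust the integrated gyro short-term and the
    accelerometer long-term; alpha near 1 favors the gyro."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Stand-in samples: (gyro rate in deg/s, accel x, accel z) at a 100 Hz rate.
samples = [(2.0, 0.10, 0.99), (1.5, 0.12, 0.99), (0.5, 0.14, 0.99)]
pitch, dt = 0.0, 0.01
for gyro_dps, ax, az in samples:
    accel_pitch = math.degrees(math.atan2(ax, az))  # gravity-derived tilt angle
    pitch = complementary_filter(pitch, gyro_dps, accel_pitch, dt)
print(round(pitch, 3))
```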

    Deep Learning for Electromyographic Hand Gesture Signal Classification Using Transfer Learning

    In recent years, deep learning algorithms have become increasingly prominent for their unparalleled ability to automatically learn discriminant features from large amounts of data. However, within the field of electromyography-based gesture recognition, deep learning algorithms are seldom employed, as they require an unreasonable amount of effort from a single person to generate tens of thousands of examples. This work's hypothesis is that general, informative features can be learned from the large amounts of data generated by aggregating the signals of multiple users, thus reducing the recording burden while enhancing gesture recognition. Consequently, this paper proposes applying transfer learning on data aggregated from multiple users, while leveraging the capacity of deep learning algorithms to learn discriminant features from large datasets. Two datasets comprising 19 and 17 able-bodied participants, respectively (the first employed for pre-training), were recorded for this work using the Myo Armband. A third Myo Armband dataset was taken from the NinaPro database and comprises 10 able-bodied participants. Three deep learning networks employing three different input modalities (raw EMG, spectrograms, and the Continuous Wavelet Transform (CWT)) are tested on the second and third datasets. The proposed transfer learning scheme is shown to systematically and significantly enhance the performance of all three networks on the two datasets, achieving an offline accuracy of 98.31% for 7 gestures over 17 participants for the CWT-based ConvNet and 68.98% for 18 gestures over 10 participants for the raw EMG-based ConvNet. Finally, a use-case study employing eight able-bodied participants suggests that real-time feedback allows users to adapt their muscle activation strategy, which reduces the degradation in accuracy normally experienced over time. Source code and datasets available: https://github.com/Giguelingueling/MyoArmbandDatase
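
    A minimal PyTorch sketch of the transfer-learning scheme described above: pre-train a ConvNet on the aggregated multi-user data, then freeze the shared feature extractor and fine-tune a fresh classification head on the target user's few examples. The toy architecture and hyperparameters below are assumptions for illustration, not the paper's networks.

```python
import torch
import torch.nn as nn

class EmgConvNet(nn.Module):
    """Toy 1-D ConvNet over 8-channel EMG windows (not the paper's model)."""
    def __init__(self, n_gestures):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(8, 32, kernel_size=5), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, n_gestures)

    def forward(self, x):
        return self.head(self.features(x))

# 1) Pre-train on the aggregated multi-user dataset (first dataset).
model = EmgConvNet(n_gestures=7)
# ... pre-training loop over the aggregated data would go here ...

# 2) Transfer: freeze the shared feature extractor, fine-tune a fresh head
#    on the small amount of data recorded from the target user.
for p in model.features.parameters():
    p.requires_grad = False
model.head = nn.Linear(64, 7)  # re-initialized, user-specific classifier
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
```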

    A wireless sEMG-based body-machine interface for assistive technology devices

    Assistive technology (AT) tools and appliances are more and more widely used and developed worldwide to improve the autonomy of people living with disabilities and ease the interaction with their environment. This paper describes an intuitive and wireless surface electromyography (sEMG) based body-machine interface for AT tools. Spinal cord injuries at the C5-C8 levels affect patients' control of their arms, forearms, hands, and fingers. Thus, using classical AT control interfaces (keypads, joysticks, etc.) is often difficult or impossible. The proposed system reads the AT users' residual functional capacities through their sEMG activity and converts them into appropriate commands using a threshold-based control algorithm. It has proven to be a suitable control alternative for assistive devices and has been tested with the JACO arm, an articulated assistive device whose purpose is to help people living with upper-body disabilities in their daily activities. The wireless prototype, whose architecture is based on a 3-channel sEMG measurement system and a 915-MHz wireless transceiver built around a low-power microcontroller, uses low-cost commercial off-the-shelf components. The embedded controller is compared with JACO's regular joystick-based interface, using combinations of forearm, pectoral, masseter, and trapezius muscles. The measured index-of-performance values are 0.88, 0.51, and 0.41 bits/s, respectively, with correlation coefficients to Fitts' model of 0.75, 0.85, and 0.67. These results demonstrate that the proposed controller offers an attractive alternative to conventional interfaces, such as joystick devices, for upper-body disabled people using ATs such as JACO.
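
    A threshold-based control algorithm of the kind described can be sketched in a few lines of Python. The channel-to-command mapping, window size, and threshold values below are hypothetical stand-ins; the paper's calibration procedure is not reproduced.

```python
import numpy as np

def envelope(emg_window):
    """Mean absolute value of a raw sEMG window: a simple activity envelope."""
    return np.mean(np.abs(emg_window))

def threshold_commands(windows, thresholds):
    """Map each channel's envelope to a binary command (hypothetical mapping:
    channel 0 -> 'left', 1 -> 'right', 2 -> 'grip'), active above threshold."""
    labels = ("left", "right", "grip")
    active = [envelope(w) > t for w, t in zip(windows, thresholds)]
    return [cmd for cmd, on in zip(labels, active) if on]

# Example: three 200-sample channel windows with per-channel thresholds.
chans = [np.random.randn(200) * g for g in (0.2, 1.0, 0.1)]
print(threshold_commands(chans, thresholds=(0.5, 0.5, 0.5)))
```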

    A wireless, wearable and multimodal body-machine interface design framework for individuals with severe disabilities

    Assistive technologies play an important role in the day-to-day lives of people with disabilities, in particular by improving their autonomy. Tools such as motorized wheelchairs and assistive robotic arms require interaction through dedicated interfaces capable of translating the user's intention. Individuals whose level of paralysis allows good residual abilities can interact with control interfaces requiring mechanical intervention and a good level of dexterity (joystick, control buttons, keyboard, switch, mouse, etc.). However, certain types of disability (spinal cord injury or malformation, cerebral palsy, trauma following an accident, congenital absence of upper-body limbs, etc.) may result in a loss of autonomy of the fingers, forearms, or arms, making it impossible to use these control tools. It is therefore essential to design alternative solutions to overcome this limitation. Researchers around the world are designing body-machine interfaces adapted to specific handicaps. Some use bio-physiological signals (electromyography (EMG), electroencephalography (EEG), electrocorticography (ECoG), electrooculography (EOG)), images of the muscular, cerebral, and oculographic activity of the human body, translating them into control commands tied to a movement, an intention formulated by the brain, etc. Others use image sensors, robust within a given environment, to translate gaze direction, head position, or facial expressions into control vectors. Although these techniques have proven effective, they are sometimes too expensive, sensitive to the environment in which they are used, cumbersome, or counter-intuitive, which explains the gap between existing needs, research and development, and the market of commercially available alternative control solutions. This project provides a framework for the design of flexible, modular, adaptive, and wearable body-machine interfaces for the severely disabled. The proposed system, which implements a wireless and multimodal body sensor network, allows different types of modalities to measure the residual capacities of individuals with severe disabilities and translates them into intuitive control commands. The architecture is designed to accommodate different types of disabilities and provide a wide range of control scenarios. Ultimately, the body-machine interface proposed by this project explores new approaches aiming to overcome the limitations of existing solutions and promote the autonomy of individuals with disabilities.
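
    One way to picture the framework's modularity is a common software interface that every sensor node implements, regardless of modality. The Python sketch below is a hypothetical illustration of that idea; the class names, the fused_pitch_deg() call, and the ±30° scaling are invented for the example.

```python
from abc import ABC, abstractmethod

class SensorNode(ABC):
    """Hypothetical modality abstraction: every node, whatever it senses
    (IMU, sEMG, ...), reports a normalized control value in [-1, 1]."""
    @abstractmethod
    def read_control(self) -> float: ...

class HeadTiltNode(SensorNode):
    """Maps fused head pitch to a control axis; ±30° gives full deflection."""
    def __init__(self, imu):
        self.imu = imu
    def read_control(self) -> float:
        return max(-1.0, min(1.0, self.imu.fused_pitch_deg() / 30.0))

class FakeImu:
    """Stub standing in for a real fused-IMU driver."""
    def fused_pitch_deg(self) -> float:
        return 12.0

def poll(nodes):
    """Poll every enrolled modality; mapping readings to device commands is
    left to a user-specific profile, which is where the adaptability lives."""
    return {type(n).__name__: n.read_control() for n in nodes}

print(poll([HeadTiltNode(FakeImu())]))  # {'HeadTiltNode': 0.4}
```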

    Transfer learning for sEMG hand gesture recognition using convolutional neural networks

    In the realm of surface electromyography (sEMG) gesture recognition, deep learning algorithms are seldom employed. This is due in part to the large quantity of data they require for training. Consequently, it would be prohibitively time-consuming for a single user to generate a sufficient amount of data to train such algorithms. In this paper, two datasets of 18 and 17 able-bodied participants, respectively, are recorded using a low-cost, low-sampling-rate (200 Hz), 8-channel, consumer-grade, dry-electrode sEMG device, the Myo armband (Thalmic Labs). A convolutional neural network (CNN) is augmented using transfer learning techniques to leverage inter-user data from the first dataset and alleviate the data generation burden imposed on a single individual. The results show that the proposed classifier is robust and precise enough to guide a 6-DOF robotic arm (in conjunction with orientation data) with the same speed and precision as a joystick. Furthermore, the proposed CNN achieves an average accuracy of 97.81% on seven hand/wrist gestures for the 17 participants of the second dataset.
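
    Before such a CNN can be trained, the continuous 8-channel, 200 Hz Myo stream must be segmented into fixed-size examples. The sketch below shows a typical sliding-window segmentation in Python; the 52-sample (260 ms) window and 5-sample step are illustrative choices, not the paper's parameters.

```python
import numpy as np

def sliding_windows(emg, length=52, step=5):
    """Cut an (n_samples, 8) Myo recording into overlapping windows.
    At 200 Hz, 52 samples = 260 ms, a real-time-friendly window size."""
    starts = range(0, emg.shape[0] - length + 1, step)
    return np.stack([emg[s:s + length].T for s in starts])  # (N, 8, length)

recording = np.random.randn(1000, 8)   # stand-in for a Myo recording
x = sliding_windows(recording)         # ready for a Conv1d-style CNN
print(x.shape)                         # (190, 8, 52)
```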

    Real-time hand motion recognition using sEMG Patterns Classification

    Increasing performance while decreasing the cost of sEMG prostheses is an important milestone in rehabilitation engineering. The different types of prosthetic hands currently available to patients worldwide can benefit from more effective and intuitive control. This paper presents a real-time approach to classifying finger motions based on surface electromyography (sEMG) signals. A multichannel signal acquisition platform implemented with off-the-shelf components is used to record forearm sEMG signals from 7 channels. sEMG pattern classification is performed in real time using a linear discriminant analysis approach. Thirteen hand motions can be successfully identified with an accuracy of up to 95.8%, and of 92.7% on average across 8 participants, with a prediction updated every 192 ms.
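
    The real-time pipeline described above reduces to feature extraction followed by an LDA prediction on each new window. The following scikit-learn sketch uses a classic time-domain feature set (mean absolute value, waveform length, zero crossings); the feature choice and the synthetic stand-in data are assumptions, not the paper's exact implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def td_features(window):
    """Classic per-channel time-domain features: mean absolute value,
    waveform length, and zero-crossing count."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)
    return np.concatenate([mav, wl, zc])

# Synthetic stand-in data: 100 windows of 7-channel sEMG with labels for
# two of the thirteen motions (real data would come from the recordings).
rng = np.random.default_rng(0)
windows = rng.standard_normal((100, 50, 7)) * rng.uniform(0.5, 2.0, (100, 1, 7))
labels = rng.integers(0, 2, size=100)

X = np.array([td_features(w) for w in windows])
clf = LinearDiscriminantAnalysis().fit(X, labels)
print(clf.predict(td_features(windows[0]).reshape(1, -1)))  # one update
```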

    A multimodal adaptive wireless control interface for people with upper-body disabilities

    This paper presents a new multimodal control interface for people living with upper-body disabilities, based on a wearable wireless sensor network. The proposed body-machine interface is modular and can easily be adapted to the residual functional capacities (RFCs) of different users. A custom data fusion algorithm has been developed to emulate a joystick control using head motion measured with a lightweight wireless inertial sensor enclosed in a headset. The wearable network can include up to six modular sensor nodes, which can be used simultaneously to read different RFCs, including gesture and muscular activity, and translate them into commands. Sensor data fusion is performed inside the sensor nodes in order to offload the wireless link and the base station and to decrease power consumption. Requirements for such an interface are established for powered-wheelchair users, and a proof-of-concept system is implemented and used to control an assistive robotic arm. It is shown that the performance of the system compares well with that of conventional control systems such as the joystick, while being potentially more suitable for the severely disabled.
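
    Emulating a joystick from fused head motion ultimately comes down to mapping angles to normalized axes. The Python sketch below shows one plausible mapping with a deadzone so that a resting posture produces no command; the deadzone and full-scale angles are invented for illustration and would be calibrated per user in practice.

```python
def head_to_joystick(pitch_deg, roll_deg, deadzone=5.0, full_scale=25.0):
    """Map fused head angles to virtual joystick axes in [-1, 1]."""
    def axis(angle):
        if abs(angle) < deadzone:
            return 0.0  # resting posture: no command
        mag = min(1.0, (abs(angle) - deadzone) / (full_scale - deadzone))
        return mag if angle > 0 else -mag
    return axis(pitch_deg), axis(roll_deg)

print(head_to_joystick(12.0, -3.0))  # forward command, no lateral command
```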